Search for: All records

Creators/Authors contains: "Cheney, Nick"

Note: Clicking a Digital Object Identifier (DOI) link will take you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).


  1. Free, publicly-accessible full text available July 13, 2026
  2. Freezing layers in deep neural networks has been shown to enhance generalization and accelerate training, yet the underlying mechanisms remain unclear. This paper investigates the impact of frozen layers from the perspective of linear separability, examining how untrained, randomly initialized layers influence feature representations and model performance. Using multilayer perceptrons trained on MNIST, CIFAR-10, and CIFAR-100, we systematically analyze the effects of freezing layers and of network architecture. While prior work attributes the benefits of frozen layers to Cover’s theorem, which suggests that nonlinear transformations improve linear separability, we find this explanation insufficient. Instead, our results indicate that the observed improvements in generalization and convergence stem from other mechanisms. We hypothesize that freezing may act similarly to other regularization techniques and that it may smooth the loss landscape to facilitate training. Furthermore, we identify key architectural factors, such as network overparameterization and the use of skip connections, that modulate the effectiveness of frozen layers. These findings offer new insights into the conditions under which freezing layers can optimize deep learning performance, informing future work on neural architecture search.
    Free, publicly-accessible full text available June 3, 2026
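The freezing mechanism this abstract refers to is straightforward to reproduce. Below is a minimal sketch, assuming PyTorch and illustrative layer sizes (neither is specified by the abstract): leaving a layer at its random initialization simply means excluding its parameters from gradient updates.

```python
import torch
import torch.nn as nn

# Minimal MLP in the spirit of the MNIST experiments; the layer sizes
# here are illustrative assumptions, not the authors' configuration.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 512), nn.ReLU(),  # hidden layer to be frozen
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 10),
)

# "Freezing" a layer: keep its random initialization and exclude its
# parameters from gradient updates.
for param in model[1].parameters():
    param.requires_grad = False

# Pass only the trainable parameters to the optimizer.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.1
)
```

Setting requires_grad = False alone already stops the updates; filtering the optimizer's parameter list as well just makes the intent explicit.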
  3. This work identifies a simple pre-training mechanism that leads to representations exhibiting better continual and transfer learning. This mechanism, the repeated resetting of weights in the last layer (which we nickname “zapping”), was originally designed for a meta-continual-learning procedure, yet we show it is surprisingly applicable in many settings beyond both meta-learning and continual learning. In our experiments, we wish to transfer a pre-trained image classifier to a new set of classes in a few shots. We show that our zapping procedure results in improved transfer accuracy and/or more rapid adaptation in both standard fine-tuning and continual learning settings, while being simple to implement and computationally efficient. In many cases, we achieve performance on par with state-of-the-art meta-learning, without needing its expensive higher-order gradients, by using a combination of zapping and sequential learning. An intuitive explanation for the effectiveness of this zapping procedure is that representations trained with repeated zapping learn features that are capable of rapidly adapting to newly initialized classifiers. Such an approach may be considered a computationally cheaper form of, or alternative to, meta-learning rapidly adaptable features with higher-order gradients. This adds to recent work on the usefulness of resetting neural network parameters during training, and invites further investigation of this mechanism.
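As described, “zapping” amounts to periodically re-randomizing the final layer during pre-training. A minimal sketch follows, again assuming PyTorch; the synthetic data, model sizes, and zap interval are placeholder assumptions, not the authors' setup.

```python
import torch
import torch.nn as nn

# Synthetic stand-in for an image-classification pre-training stream.
loader = [(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
          for _ in range(300)]

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
    nn.Linear(256, 10),  # classifier head that gets "zapped"
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
ZAP_EVERY = 100  # steps between resets (hypothetical interval)

for step, (x, y) in enumerate(loader):
    if step % ZAP_EVERY == 0:
        # "Zapping": reset the last layer to a fresh random init, forcing
        # the earlier features to keep adapting to new classifiers.
        model[-1].reset_parameters()
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
```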
  4. Soft robotics is a rapidly growing area of robotics research that would benefit greatly from design automation, given the challenges of manually engineering complex, compliant, and generally non-intuitive robot body plans and behaviors. It has been suggested that a major hurdle currently limiting soft robot brain-body co-optimization is the fragile specialization between a robot's controller and the particular body plan it controls, resulting in premature convergence. Here we posit that modular controllers are more robust to changes in a robot's body plan. We demonstrate that soft robots with modular controllers suffer a smaller reduction in locomotion performance after morphological mutations than those with comparable global controllers, leading to fitter offspring. Moreover, we show that the increased transferability of modular controllers to similar body plans enables more effective brain-body co-optimization of soft robots, resulting in an increased rate of positive morphological mutations and higher overall performance of evolved robots. We hope that this work provides specific methods to improve soft robot design automation in this setting, while also providing evidence to support our understanding of the challenges of brain-body co-optimization more generally.
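To make the transferability argument concrete, here is a toy sketch of why a modular (per-voxel) controller can survive a body-plan mutation better than a global one. The body encoding, mutation operator, and stubbed simulate function are illustrative assumptions, not the paper's voxel-robot simulator.

```python
import random

Body = set  # a body plan as a set of (x, y) voxel coordinates

def mutate_body(body: Body) -> Body:
    """Toy morphological mutation: add or remove a single voxel."""
    new = set(body)
    x, y = random.choice(sorted(new))
    dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
    candidate = (x + dx, y + dy)
    if candidate in new and len(new) > 2:
        new.discard(candidate)
    else:
        new.add(candidate)
    return new

def transfer_modular(ctrl: dict, body: Body) -> dict:
    """Modular controller: one actuation parameter per voxel, keyed by
    position. Surviving voxels keep their tuned parameters; only new
    voxels get fresh random ones, so behavior is largely preserved."""
    return {v: ctrl.get(v, random.random()) for v in body}

def transfer_global(ctrl: list, body: Body) -> list:
    """Global controller: a flat parameter vector implicitly tied to a
    voxel ordering. A body mutation shifts that ordering, scrambling
    which parameter drives which voxel."""
    return [ctrl[i] if i < len(ctrl) else random.random()
            for i in range(len(body))]

# simulate(body, controller) would score locomotion in a soft-body
# physics engine; stubbed here since the point is the transfer logic.
def simulate(body: Body, controller) -> float:
    return 0.0  # placeholder fitness

body = {(0, 0), (1, 0), (0, 1)}
parent_ctrl = transfer_modular({}, body)  # fresh random parameters
child_body = mutate_body(body)
child_ctrl = transfer_modular(parent_ctrl, child_body)  # most params carry over
```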
  5. In environments that vary frequently and unpredictably, bet-hedgers can overtake the population. Diversifying bet-hedgers produce a diverse set of offspring so that, no matter the conditions they find themselves in, at least some offspring will have high fitness. In contrast, conservative bet-hedgers produce offspring that all share an intermediate phenotype relative to the specialists. Here, we use an evolutionary algorithm over gene regulatory networks to evolve the two strategies de novo and investigate their relative success under different parameter settings. We found that diversifying bet-hedgers almost always evolved first, but were eventually outcompeted by conservative bet-hedgers. We argue that even though similar selection pressures apply to the two bet-hedging strategies, conservative bet-hedgers could win due to the robustness of their evolved networks, in contrast to the sensitive networks of the diversifying bet-hedgers. These results reveal an unexplored aspect of the evolution of bet-hedging that could shed more light on the principles of biological adaptation in variable environmental conditions.
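The competition between strategies can be illustrated numerically. The toy model below, with an assumed Gaussian fitness function and a 50/50 fluctuating environment (none of which comes from the paper), shows why long-run success is governed by geometric-mean fitness and why both bet-hedgers outperform a specialist.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Gaussian fitness: phenotype p does best when it matches the
# current environment e in {0, 1}. The width 0.5 is an arbitrary choice.
def fitness(p: float, e: int, width: float = 0.5) -> float:
    return float(np.exp(-((p - e) ** 2) / width))

T = 10_000
envs = rng.integers(0, 2, size=T)  # environment flips unpredictably

strategies = {
    "specialist":   [0.0, 0.0],  # all offspring suit environment 0 only
    "diversifying": [0.0, 1.0],  # offspring split between the extremes
    "conservative": [0.5, 0.5],  # all offspring intermediate
}

for name, offspring in strategies.items():
    # Per-generation growth = mean offspring fitness; long-run growth
    # is the geometric mean of that quantity across generations.
    per_gen = [np.mean([fitness(p, e) for p in offspring]) for e in envs]
    geo = np.exp(np.mean(np.log(per_gen)))
    print(f"{name:>12}: geometric-mean fitness = {geo:.3f}")
```

In this arbitrary parameterization the conservative strategy edges out the diversifying one, loosely echoing the paper's reported outcome, though the paper attributes that result to network robustness rather than to fitness-function curvature.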